The COVID-19 pandemic disrupted the lives of everyone in the world. In this work, we characterize patterns of subjective well-being in 112 US cities, as represented by the subreddits corresponding to those cities, during the pandemic and before vaccines became available. We quantify subjective well-being using positive and negative affect. We then measure the pandemic's impact by comparing each community's observed well-being with its expected well-being, as forecasted by time series models of the pre-pandemic period. We show that general community characteristics reflected in language are predictive: we forecast how the pandemic will affect each community's well-being using language and interaction features from normal times before the pandemic. We find that communities whose interaction features correspond to more tightly connected users and higher engagement were significantly affected. Notably, we find that communities that talk more about commonly experienced social ties, such as friends, family, and affiliations, were actually more likely to be affected. In addition, we use the same features to predict how quickly each community would recover after the initial onset of the pandemic. We similarly find that communities that talk more about family, affiliations, and identifying as part of a group recovered more slowly.
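As a minimal sketch of the impact measure described above (with hypothetical data layout, and a simple linear trend standing in for the paper's actual pre-pandemic time series model), one could score a community like this:

```python
import numpy as np

def pandemic_impact(affect, onset):
    """`affect`: one community's weekly affect score (hypothetical layout);
    `onset`: index of the pandemic's start. A linear trend fitted on the
    pre-onset weeks stands in for the expected, 'no pandemic' trajectory."""
    t = np.arange(len(affect))
    slope, intercept = np.polyfit(t[:onset], affect[:onset], 1)
    expected = intercept + slope * t[onset:]            # counterfactual forecast
    return float((affect[onset:] - expected).mean())    # negative => well-being drop

# Toy usage with synthetic data and a simulated shock at week 90.
rng = np.random.default_rng(0)
series = 0.6 + 0.001 * np.arange(120) + 0.01 * rng.standard_normal(120)
series[90:] -= 0.05
print(pandemic_impact(series, onset=90))                # roughly -0.05
```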
Continually learning to segment more and more types of image regions is a desired capability for many intelligent systems. However, such continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning. While multiple knowledge distillation strategies originally designed for continual classification have been well adapted to continual semantic segmentation, they only consider transferring old knowledge based on the outputs from one or more layers of deep fully convolutional networks. Different from existing solutions, this study proposes to transfer a new type of knowledge-relevant information, i.e., the relationships between elements (e.g., pixels or small local regions) within each image, which can capture both within-class and between-class knowledge. This relationship information can be effectively obtained from the self-attention maps in a Transformer-style segmentation model. Considering that pixels belonging to the same class in each image often share similar visual properties, a class-specific region pooling is applied to provide more efficient relationship information for knowledge transfer. Extensive evaluations on multiple public benchmarks show that the proposed self-attention transfer method can further effectively alleviate catastrophic forgetting, and that its flexible combination with one or more widely adopted strategies significantly outperforms state-of-the-art solutions.
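To make the relationship-transfer idea concrete, here is a hedged PyTorch sketch of class-specific region pooling over self-attention maps; the tensor layout, pooling rule, and loss are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def pooled_attention_distill(attn_old, attn_new, labels, num_classes):
    """attn_*: (B, N, N) self-attention maps from the old/new model
    (attn_old would be detached in training); labels: (B, N) class id of
    each token/region. For every class, pool each query's attention mass
    over that class's regions, then match old vs. new pooled relationships."""
    losses = []
    for c in range(num_classes):
        mask = (labels == c).float()                           # (B, N)
        denom = mask.sum(dim=1, keepdim=True).clamp(min=1.0)   # region count
        pooled_old = (attn_old @ mask.unsqueeze(-1)).squeeze(-1) / denom
        pooled_new = (attn_new @ mask.unsqueeze(-1)).squeeze(-1) / denom
        losses.append(F.mse_loss(pooled_new, pooled_old))
    return torch.stack(losses).mean()

# Toy usage: batch of 2 images, 16 regions, 3 old classes.
B, N, C = 2, 16, 3
attn_old = torch.softmax(torch.randn(B, N, N), dim=-1)
attn_new = torch.softmax(torch.randn(B, N, N), dim=-1)
labels = torch.randint(0, C, (B, N))
print(pooled_attention_distill(attn_old, attn_new, labels, C))
```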
Summary quality assessment metrics fall into two categories: reference-based and reference-free. Reference-based metrics are theoretically more accurate but are limited by the availability and quality of human-written references, both of which are difficult to ensure. This has inspired the development, in the past few years, of reference-free metrics, which are independent of human-written references. However, existing reference-free metrics cannot be both zero-shot and accurate. In this paper, we propose a zero-shot yet accurate reference-free approach in a sneaky way: feeding the documents on which the summaries are based into reference-based metrics as the references. Experimental results show that this zero-shot approach yields the best-performing reference-free metrics on nearly all aspects of several recently released datasets, sometimes even beating reference-free metrics specifically trained for the task. We further investigate which reference-based metrics can benefit from such repurposing and whether our additional tweaks help.
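The core trick is simple enough to sketch: treat the source document itself as the "reference" of any reference-based metric. The toy below uses a hand-rolled unigram ROUGE-1 as that metric; the paper's actual metric choices may differ.

```python
from collections import Counter

def rouge1_f(reference_tokens, candidate_tokens):
    """Plain unigram ROUGE-1 F1; any reference-based metric could be
    substituted here (ROUGE-L, BERTScore, ...)."""
    overlap = sum((Counter(reference_tokens) & Counter(candidate_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(candidate_tokens)
    recall = overlap / len(reference_tokens)
    return 2 * precision * recall / (precision + recall)

def reference_free_score(document, summary):
    # The repurposing trick: the source document plays the reference role.
    return rouge1_f(document.lower().split(), summary.lower().split())

doc = "The council approved the new park budget after a long debate."
print(reference_free_score(doc, "The council approved the park budget."))
```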
Embedding tables are usually huge in click-through rate (CTR) prediction models. To train and deploy CTR models efficiently and economically, it is necessary to compress their embedding tables at the training stage. To this end, we formulate a novel quantization training paradigm that compresses the embeddings from the training stage onward, termed low-precision training (LPT), and provide a theoretical analysis of its convergence. The results show that stochastic weight quantization has a faster convergence rate and a smaller convergence error than deterministic weight quantization in LPT. Further, to reduce the accuracy degradation, we propose adaptive low-precision training (ALPT), which learns the step size (i.e., the quantization resolution) through gradient descent. Experiments on two real-world datasets confirm our analysis and show that ALPT can significantly improve prediction accuracy, especially at extremely low bit widths. For the first time in CTR models, we successfully train 8-bit embeddings without sacrificing prediction accuracy. The code of ALPT is publicly available.
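A hedged sketch of what such low-precision training can look like, combining stochastic rounding with a step size learned by gradient descent via a straight-through estimator (an LSQ-style reconstruction for illustration, not the released ALPT code):

```python
import torch

class LearnedStepQuant(torch.nn.Module):
    """Stochastic quantizer with a learnable step size (log-parameterized
    so it stays positive)."""
    def __init__(self, bits=8, init_step=0.01):
        super().__init__()
        self.qmax = 2 ** (bits - 1) - 1
        self.log_step = torch.nn.Parameter(torch.tensor(init_step).log())

    def forward(self, w):
        step = self.log_step.exp()
        scaled = (w / step).clamp(-self.qmax, self.qmax)
        frac = scaled - scaled.floor()
        # Stochastic rounding keeps the quantizer unbiased, which relates
        # to the faster convergence discussed in the abstract.
        q = scaled.floor() + (torch.rand_like(scaled) < frac).float()
        # Straight-through estimator: forward uses q, backward sees `scaled`,
        # so both the weights and the step size receive gradients.
        return (scaled + (q - scaled).detach()) * step

quant = LearnedStepQuant(bits=8)
emb = torch.nn.Parameter(torch.randn(1000, 16) * 0.01)
loss = quant(emb).pow(2).sum()
loss.backward()                     # gradients flow to emb and log_step
```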
Some recent works have observed that post-hoc explanations are unstable when input-side perturbations are applied to the model. This raises both interest and concern regarding the stability of post-hoc explanations. The open question, however, is: is the instability caused by the neural network model or by the post-hoc explanation method? This work explores the potential sources of unstable post-hoc explanations. To separate out the influence of the model, we propose a simple output probability perturbation method. Compared with prior input-side perturbation methods, output probability perturbation circumvents the neural model's potential effect on the explanations and allows analysis of the explanation method itself. We evaluate the proposed method with three widely used post-hoc explanation methods (LIME (Ribeiro et al., 2016), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley (Strumbelj and Kononenko, 2010)). The results demonstrate that the post-hoc methods are stable, barely producing discrepant explanations under output probability perturbations. This observation suggests that neural network models may be the primary source of fragile explanations.
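A minimal sketch of the output probability perturbation protocol (function names are illustrative, not the paper's API): jitter the class probabilities directly, bypassing the network, and measure how much the explanation moves.

```python
import numpy as np

def perturb_probs(probs, sigma=0.01, rng=None):
    """Add small Gaussian noise to the class probabilities and renormalize,
    without touching the model at all."""
    rng = rng or np.random.default_rng()
    noisy = np.clip(probs + rng.normal(0.0, sigma, probs.shape), 1e-8, None)
    return noisy / noisy.sum()

def explanation_stability(explain, probs, n_trials=20):
    """`explain` maps a probability vector to an attribution vector
    (e.g., a LIME or Shapley wrapper). A small mean gap means the
    explanation method itself is stable under output perturbations."""
    base = explain(probs)
    gaps = [np.abs(explain(perturb_probs(probs)) - base).mean()
            for _ in range(n_trials)]
    return float(np.mean(gaps))
```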
Graph neural networks (GNNs) have recently emerged as a promising paradigm for learning graph-structured data and have demonstrated wide success across various domains such as recommendation systems, social networks, and electronic design automation (EDA). Like other deep learning (DL) methods, GNNs are being deployed in sophisticated modern hardware systems as well as dedicated accelerators. However, despite the popularity of GNNs and the recent efforts to bring GNNs to hardware, the fault tolerance and resilience of GNNs have generally been overlooked. Inspired by the inherent algorithmic resilience of DL methods, this paper conducts, for the first time, a large-scale empirical study of GNN resilience, aiming to understand the relationship between hardware faults and GNN accuracy. By developing a customized fault injection tool on top of PyTorch, we perform extensive fault injection experiments on various GNN models and application datasets. We observe that the error resilience of GNN models varies by orders of magnitude across models and application datasets. Further, we explore a low-cost error mitigation mechanism for GNNs to enhance their resilience. This GNN resilience study aims to open up new directions and opportunities for future GNN accelerator design and architectural optimization.
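The core mechanism of such a fault injector is a bit flip in a randomly chosen weight; a minimal PyTorch sketch (the paper's tool is more elaborate, this only shows the mechanics) might look like:

```python
import torch

@torch.no_grad()
def inject_bit_flip(model, bit=30):
    """Flip one bit of the IEEE-754 encoding of a single randomly chosen
    float32 weight. High-exponent bits (e.g., 30) tend to cause the large,
    model-dependent accuracy drops such a study measures."""
    params = [p for p in model.parameters() if p.dtype == torch.float32]
    p = params[torch.randint(len(params), (1,)).item()].view(-1)
    i = torch.randint(p.numel(), (1,)).item()
    val = p[i : i + 1].clone()
    p[i] = (val.view(torch.int32) ^ (1 << bit)).view(torch.float32)[0]

# Toy usage on a stand-in model (a real study would target a GNN).
model = torch.nn.Linear(8, 4)
inject_bit_flip(model)
```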
We study a double robust Bayesian inference procedure on the average treatment effect (ATE) under unconfoundedness. Our Bayesian approach involves a correction term for prior distributions adjusted by the propensity score. We prove asymptotic equivalence of our Bayesian estimator and efficient frequentist estimators by establishing a new semiparametric Bernstein-von Mises theorem under double robustness; i.e., the lack of smoothness of conditional mean functions can be compensated by high regularity of the propensity score and vice versa. Consequently, the resulting Bayesian point estimator internalizes the bias correction as the frequentist-type doubly robust estimator, and the Bayesian credible sets form confidence intervals with asymptotically exact coverage probability. In simulations, we find that this corrected Bayesian procedure leads to significant bias reduction of point estimation and accurate coverage of confidence intervals, especially when the dimensionality of covariates is large relative to the sample size and the underlying functions become complex. We illustrate our method in an application to the National Supported Work Demonstration.
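For reference, the frequentist doubly robust (AIPW) estimator to which the Bayesian point estimator is asymptotically equivalent, in standard notation (outcome $Y_i$, treatment $D_i$, covariates $X_i$, outcome regressions $\hat{m}_d$, propensity score $\hat{e}$; the notation is conventional rather than copied from the paper):

```latex
\hat{\tau}_{\mathrm{DR}}
  = \frac{1}{n}\sum_{i=1}^{n}\left[
      \hat{m}_{1}(X_i)-\hat{m}_{0}(X_i)
      +\frac{D_i\left(Y_i-\hat{m}_{1}(X_i)\right)}{\hat{e}(X_i)}
      -\frac{(1-D_i)\left(Y_i-\hat{m}_{0}(X_i)\right)}{1-\hat{e}(X_i)}
    \right]
```

This estimator remains consistent if either the outcome regressions or the propensity score is well estimated, mirroring the compensation between the two nuisance functions described above.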
Human-robot interaction (HRI) is an important component for improving the flexibility of modern production lines. However, in real-world applications, the task (i.e., the conditions under which the robot needs to operate, such as the environmental lighting conditions, the human subjects to interact with, and the hardware platform) may vary, and it remains challenging to optimally and efficiently configure and tune the robot system under these changing tasks. To address this challenge, this paper proposes a task-agnostic adaptive controller that can 1) adapt to different lighting conditions, 2) adapt to individual behaviors and ensure safety when interacting with different humans, and 3) enable easy transfer of the control interface across different robot platforms. The proposed controller is tested on human-robot handover tasks with a FANUC LR Mate 200iD/7L robot and a Kinova Gen3 robot. Experiments show that the proposed task-agnostic controller achieves consistent performance across the different tasks.
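One way to read "task-agnostic" is as an interface boundary; the sketch below (all class and method names hypothetical) shows how swappable perception, human-model, and robot-driver modules let a single control loop cover the three kinds of variation:

```python
class TaskAgnosticController:
    """Interface sketch only, under assumed module boundaries: the same
    control loop runs unchanged across lighting conditions, human
    partners, and robot platforms."""
    def __init__(self, perception, human_model, robot):
        self.perception = perception      # e.g., exposure-adaptive tracker
        self.human_model = human_model    # per-user behavior and safety bounds
        self.robot = robot                # FANUC or Kinova driver behind one API

    def step(self):
        state = self.perception.estimate()           # lighting-robust sensing
        target = self.human_model.predict(state)     # personalized intent
        cmd = self.human_model.safety_filter(target) # keep the motion safe
        self.robot.execute(cmd)                      # platform-specific I/O
```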
Conventional works on rationalization generally employ a two-phase model, in which a generator selects the most important pieces of the input text and a predictor then makes predictions based on the selected pieces. However, such a two-phase model may incur a degeneration problem: the predictor overfits to the noise produced by the not-yet-well-trained generator, which in turn drives the generator to converge to a suboptimal model that tends to select meaningless pieces. To address this challenge, we propose Folded Rationalization (FR), which folds the two phases of the rationale model into one from the perspective of text semantic extraction. The key idea of FR is to employ a unified encoder shared between the generator and the predictor; with it, FR can foster a better predictor by giving it access to valuable information that the generator blocks in conventional two-phase models, and thereby obtain a better generator as well. Empirically, we show that FR improves the F1 score by 10.3% compared with state-of-the-art methods.
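A minimal PyTorch sketch of the folded architecture under assumed dimensions: one shared encoder feeds both the token selector and the classifier, so the predictor sees the full encoded text rather than only the generator's selection.

```python
import torch
import torch.nn as nn

class FoldedRationalizer(nn.Module):
    """Illustrative reconstruction, not the paper's code: the generator
    (selector) and predictor (classifier) share a single encoder."""
    def __init__(self, vocab, dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # unified encoder
        self.selector = nn.Linear(dim, 1)    # per-token rationale logit
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))            # (B, T, dim)
        mask = torch.sigmoid(self.selector(h))             # soft rationale mask
        pooled = (mask * h).sum(1) / mask.sum(1).clamp(min=1e-6)
        return self.classifier(pooled), mask.squeeze(-1)

tokens = torch.randint(0, 5000, (4, 32))
logits, rationale = FoldedRationalizer(vocab=5000)(tokens)
```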
Most existing time series classification (TSC) models lack interpretability and are difficult to inspect. Interpretable machine learning models can help discover patterns in the data and provide easy-to-understand insights to domain experts. In this study, we propose Neuro-Symbolic Time Series Classification (NSTSC), a neuro-symbolic model that leverages signal temporal logic (STL) and neural networks (NNs) to accomplish TSC tasks using multi-view data representations, and that expresses the model as a human-readable, interpretable formula. In NSTSC, each neuron is associated with a symbolic expression, i.e., an STL (sub)formula. The output of NSTSC is therefore interpretable as an STL formula akin to natural language, describing the temporal and logical relations hidden in the data. We propose an NSTSC-based classifier that adopts a decision-tree approach to learn the formula structure and accomplish a multiclass TSC task. The proposed smooth activation functions for wSTL allow the model to be learned in an end-to-end fashion. We test NSTSC on a real-world wound-healing dataset from mice and on benchmark datasets from the UCR time series repository, demonstrating that NSTSC achieves performance comparable to the state-of-the-art models. Moreover, NSTSC can generate interpretable formulas that match domain knowledge.
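To illustrate the end-to-end trainability claim, here is a common smooth approximation of STL temporal operators via log-sum-exp (illustrative semantics only; the paper's wSTL activation functions may differ in detail):

```python
import torch

def soft_max(x, dim, temp=10.0):
    """Smooth max via log-sum-exp; temp -> infinity recovers the hard max."""
    return torch.logsumexp(temp * x, dim=dim) / temp

def eventually(robustness, temp=10.0):
    """Smooth robustness of STL's 'eventually' (F) over time: a
    differentiable stand-in for max, so formula parameters can train."""
    return soft_max(robustness, dim=-1, temp=temp)

def always(robustness, temp=10.0):
    """Smooth 'always' (G): soft min over time, i.e., -soft_max(-x)."""
    return -soft_max(-robustness, dim=-1, temp=temp)

# Toy usage: robustness of the predicate x(t) > 0.5 on a batch of signals.
x = torch.tensor([[0.2, 0.7, 0.4], [0.1, 0.3, 0.2]])
r = x - 0.5
print(eventually(r))   # > 0 where the threshold is crossed at some t
print(always(r))       # > 0 only if it is crossed at every t
```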